An information integration theory of consciousness
BACKGROUND: Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition? PRESENTATION OF THE HYPOTHESIS: This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Φ value of a complex of elements. Φ is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex. TESTING THE HYPOTHESIS: The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness. IMPLICATIONS OF THE HYPOTHESIS: The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts.
What caused what? A quantitative account of actual causation using dynamical causal networks
Actual causation is concerned with the question "what caused what?" Consider a transition between two states within a system of interacting elements, such as an artificial neural network or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system's causal network (its elements, their states, connectivity, and dynamics) does not automatically provide a straightforward answer to the "what caused what?" question. Counterfactual accounts of actual causation based on graphical models, paired with system interventions, have demonstrated initial success in addressing specific problem cases in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements that is based on system interventions and partitions, and considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.
Comment: 43 pages, 16 figures, supplementary discussion, supplementary methods, supplementary proof
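To build intuition for scoring causes via interventions and counterfactuals, here is a toy Python sketch. It is a hypothetical illustration, not the paper's formalism: an OR gate fires from inputs (A=1, B=0), and each candidate input-state is scored by a counterfactual log-ratio comparing the probability of the observed output when that part is held fixed (with the remaining inputs perturbed uniformly) against a fully uniform intervention on all inputs.

```python
from itertools import product
from math import log2

def or_gate(a, b):
    return int(a or b)

def cause_strength(constraints, observed_output):
    # constraints: the fixed input-state of the candidate cause,
    # e.g. ((0, 1),) for "input A = 1"; other inputs vary uniformly.
    def p_output(fixed):
        states = [s for s in product((0, 1), repeat=2)
                  if all(s[i] == v for i, v in fixed)]
        return sum(or_gate(*s) == observed_output for s in states) / len(states)
    # Log-ratio: probability of the observed output with the candidate fixed,
    # versus under an unconstrained uniform intervention on all inputs.
    return log2(p_output(constraints) / p_output(()))

# Transition: inputs (A=1, B=0) -> output 1
print(round(cause_strength(((0, 1),), 1), 3))  # A=1: 0.415, positive -> a cause
print(round(cause_strength(((1, 0),), 1), 3))  # B=0: -0.585, negative -> not a cause
```

On this crude ratio, A=1 raises the probability of the observed firing and scores positive, while B=0 lowers it and scores negative, matching the intuitive judgment that A caused the OR gate to fire.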
The Neural Correlates of Consciousness - An Update
This review examines recent advances in the study of brain correlates of consciousness. First, we briefly discuss some useful distinctions between consciousness and other brain functions. We then examine what has been learned by studying global changes in the level of consciousness, such as sleep, anesthesia, and seizures. Next we consider some of the most common paradigms used to study the neural correlates for specific conscious percepts and examine what recent findings say about the role of different brain regions in giving rise to consciousness for that percept. Then we discuss dynamic aspects of neural activity, such as sustained versus phasic activity, feedforward versus reentrant activity, and the role of neural synchronization. Finally, we briefly consider how a theoretical analysis of the fundamental properties of consciousness can usefully complement neurobiological studies.
Time to Be SHY? Some Comments on Sleep and Synaptic Homeostasis
Sleep must serve an essential, universal function, one that offsets the risk of being disconnected from the environment. The synaptic homeostasis hypothesis (SHY) is an attempt to identify this essential function. Its core claim is that sleep is needed to reestablish synaptic homeostasis, which is challenged by the remarkable plasticity of the brain. In other words, sleep is “the price we pay for plasticity.” In this issue, M. G. Frank reviewed several aspects of the hypothesis and raised a number of issues. The comments below provide a brief summary of the motivations underlying SHY and clarify that SHY is a hypothesis not about specific mechanisms, but about a universal, essential function of sleep. This function is the preservation of synaptic homeostasis in the face of a systematic bias toward a net increase in synaptic strength—a challenge that is posed by learning during adult wake, and by massive synaptogenesis during development.
Measuring information integration
BACKGROUND: To understand the functioning of distributed networks such as the brain, it is important to characterize their ability to integrate information. The paper considers a measure based on effective information, a quantity capturing all causal interactions that can occur between two parts of a system. RESULTS: The capacity to integrate information, or Φ, is given by the minimum amount of effective information that can be exchanged between two complementary parts of a subset. It is shown that this measure can be used to identify the subsets of a system that can integrate information, or complexes. The analysis is applied to idealized neural systems that differ in the organization of their connections. The results indicate that Φ is maximized by having each element develop a different connection pattern with the rest of the complex (functional specialization) while ensuring that a large amount of information can be exchanged across any bipartition of the network (functional integration). CONCLUSION: Based on this analysis, the connectional organization of certain neural architectures, such as the thalamocortical system, is well suited to information integration, while that of others, such as the cerebellum, is not, with significant functional consequences. The proposed analysis of information integration should be applicable to other systems and networks.
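As a rough illustration of the bipartition search behind Φ, the following Python sketch scores a tiny deterministic network. It is a simplified stand-in, not the paper's exact measure: effective information is approximated by the mutual information between one part's uniformly perturbed state and the other part's next state, and the update rule (a hypothetical three-element copy ring) is chosen only for illustration.

```python
from collections import Counter
from itertools import combinations, product
from math import log2

N = 3  # hypothetical 3-element binary network: each element copies its left neighbour

def update(state):
    return tuple(state[(i - 1) % N] for i in range(N))

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values() if c)

def mutual_information(pairs):
    px, py, pxy = Counter(), Counter(), Counter()
    for x, y in pairs:
        px[x] += 1
        py[y] += 1
        pxy[x, y] += 1
    return entropy(px) + entropy(py) - entropy(pxy)

def ei(src, dst):
    # Crude stand-in for effective information: perturb the whole system
    # uniformly and ask how much the source part's state tells us about
    # the destination part's next state.
    samples = []
    for state in product((0, 1), repeat=N):
        nxt = update(state)
        samples.append((tuple(state[i] for i in src),
                        tuple(nxt[i] for i in dst)))
    return mutual_information(samples)

def phi_like():
    # Minimum over all bipartitions of the weaker direction of exchange:
    # the "informational weakest link" of the system.
    values = []
    for r in range(1, N):
        for part in combinations(range(N), r):
            rest = tuple(i for i in range(N) if i not in part)
            values.append(min(ei(part, rest), ei(rest, part)))
    return min(values)

print(phi_like())  # 1.0 bit for the copy ring
```

On a less symmetric network the minimum picks out the weakest bipartition; subsets with a positive value that are not contained in any higher-valued subset would play the role the paper assigns to complexes.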
Is Sleep Essential?
No current hypothesis can explain why animals need to sleep. Yet sleep is universal, tightly regulated, and animals cannot be deprived of it without deleterious consequences. This suggests that searching for a core function of sleep, particularly at the cellular level, is still a worthwhile exercise.
When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents
An agent's actions can be influenced by external factors through the inputs it receives from the environment, as well as by internal factors, such as memories or intrinsic preferences. The extent to which an agent's actions are "caused from within", as opposed to being externally driven, should depend on its sensor capacity as well as on environmental demands for memory and context-dependent behavior. Here, we test this hypothesis using simulated agents ("animats"), equipped with small adaptive Markov Brains (MB) that evolve to solve a perceptual-categorization task under conditions that vary with regard to the agents' sensor capacity and task difficulty. Using a novel formalism developed to identify and quantify the actual causes of occurrences ("what caused what?") in complex networks, we evaluate the direct causes of the animats' actions. In addition, we extend this framework to trace the causal chain ("causes of causes") leading to an animat's actions back in time, and compare the obtained spatio-temporal causal history across task conditions. We found that measures quantifying the extent to which an animat's actions are caused by internal factors (as opposed to being driven by the environment through its sensors) varied consistently with defining aspects of the task conditions the animats evolved to thrive in.
Comment: Submitted and accepted to the ALife 2019 conference. Revised version: edits include adding more references to relevant work and clarifying minor points in response to reviewers